Label smoothing is a regularization technique widely used in supervised learning to improve the generalization of models on various tasks, such as image classification and machine translation. However, the effectiveness of label smoothing in multi-hop question answering (MHQA) has yet to be well studied. In this paper, we systematically analyze the role of label smoothing on various modules of MHQA and propose F1 smoothing, a novel label smoothing technique specifically designed for machine reading comprehension (MRC) tasks. We evaluate our method on the HotpotQA dataset and demonstrate its superiority over several strong baselines, including models that utilize complex attention mechanisms. Our results suggest that label smoothing can be effective in MHQA, but the choice of smoothing strategy can significantly affect performance.
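As a reference point for the technique discussed above, uniform label smoothing can be sketched in a few lines (this is the generic baseline, not the paper's F1 smoothing; the function and parameter names are illustrative):

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Uniform label smoothing: blend the one-hot target with a uniform
    distribution over all classes. Names here are illustrative; the
    paper's F1 smoothing redistributes target mass differently,
    tailored to span prediction in MRC."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

# A one-hot target for class 2 of 4 becomes [0.025, 0.025, 0.925, 0.025].
targets = np.eye(4)[2]
smoothed = smooth_labels(targets, epsilon=0.1)
```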
We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct the masked-out image-text aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters and set new records on a broad range of representative vision downstream tasks, such as image recognition, video action recognition, object detection, instance segmentation, and semantic segmentation, without heavy supervised training. Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap on the challenging large-vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on the LVISv1.0 dataset, with over a thousand categories, as on the COCO dataset, with only eighty categories. Beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find that initializing the vision tower of a giant CLIP from EVA can greatly stabilize training and outperform the from-scratch counterpart with far fewer samples and less compute, providing a new direction for scaling up and accelerating the costly training of multi-modal foundation models. To facilitate future research, we release all the code and models at https://github.com/baaivision/EVA.
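The pretext task amounts to regressing a frozen teacher's image-text-aligned features at masked patch positions. A minimal sketch of such a loss (assuming a per-patch cosine-distance objective; the exact formulation in the paper may differ):

```python
import numpy as np

def masked_feature_loss(pred, target, mask):
    """Masked feature prediction: compare student predictions against
    frozen teacher features, but only at masked patch positions.
    pred, target: (num_patches, dim); mask: boolean (num_patches,).
    The cosine-distance objective here is an assumption for the sketch."""
    pred_n = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    tgt_n = target / np.linalg.norm(target, axis=-1, keepdims=True)
    cosine = (pred_n * tgt_n).sum(axis=-1)     # per-patch similarity
    return float((1.0 - cosine)[mask].mean())  # loss on masked patches only
```

Perfect predictions give zero loss; orthogonal predictions give a loss of one per masked patch.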
Multi-view unsupervised feature selection (MUFS) has been shown to be an effective technique for reducing the dimensionality of multi-view unlabeled data. Existing methods assume that all views are complete. However, multi-view data are often incomplete, i.e., some instances are present in a subset of views but not in all of them. Moreover, learning a complete similarity graph, an important and promising technique in existing MUFS methods, cannot be achieved due to the missing views. In this paper, we propose a complementary and consensus learning-based incomplete multi-view unsupervised feature selection method (C$^{2}$IMUFS) to address these issues. Specifically, C$^{2}$IMUFS integrates feature selection into an extended weighted non-negative matrix factorization model equipped with adaptively learned view weights and a sparse $\ell_{2,p}$-norm, which offers better adaptability and flexibility. A complementary-learning-guided similarity-matrix reconstruction model, based on a sparse linear combination of multiple similarity matrices derived from different views, is introduced to obtain a complete similarity graph in each view. Furthermore, C$^{2}$IMUFS learns a consensus clustering indicator matrix across different views and embeds it into a spectral graph term to preserve the local geometric structure. Comprehensive experimental results on real-world datasets demonstrate the effectiveness of C$^{2}$IMUFS compared with state-of-the-art methods.
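The $\ell_{2,p}$-norm regularizer mentioned above promotes row sparsity in the feature-selection matrix, so that entire features are discarded at once. Its value can be computed as follows (a small sketch; variable names are illustrative):

```python
import numpy as np

def l2p_norm(W, p=0.5):
    """Compute the l_{2,p} norm of W: the p-th root of the sum of the
    p-th powers of the rows' Euclidean norms. For 0 < p < 1, using it
    as a penalty pushes whole rows of W toward zero, i.e. it drops
    entire features rather than individual entries."""
    row_norms = np.linalg.norm(W, axis=1)  # l2 norm of each row (feature)
    return float((row_norms ** p).sum() ** (1.0 / p))
```

For p = 1 this reduces to the familiar $\ell_{2,1}$ norm, the sum of row norms.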
Intellectual property (IP) is increasingly important in economic development. To address the pain points of traditional IP valuation methods, we are developing a new technology with machine learning at its core. We have built an online platform and will expand our business in the Greater Bay Area.
Standard plane (SP) localization is crucial for routine clinical ultrasound (US) diagnosis. Compared with 2D US, 3D US can acquire multiple view planes in one scan and, with the addition of coronal planes, provide complete anatomical structures. However, manually navigating SPs in 3D US is laborious and biased due to orientation variability and the huge search space. In this study, we present a novel reinforcement learning (RL) framework for automatic SP localization in 3D US. Our contribution is threefold. First, we formulate SP localization in 3D US as a tangent-based problem in RL to restructure the action space and significantly reduce the search space. Second, we design an auxiliary-task learning strategy to enhance the model's ability to recognize subtle differences between non-SPs and SPs during the plane search. Third, we propose a spatial-anatomical reward that exploits spatial and anatomical information simultaneously to effectively guide the learning trajectory. We evaluate the efficacy of our approach on localizing four SPs on uterus and fetal brain datasets. Experiments show that our approach achieves high localization accuracy as well as robust performance.
Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL). In addition to value maximization, other practical considerations arise as well, including encouraging exploration and ensuring certain structural properties of the learned policy due to safety, resource, and operational constraints. These considerations can often be accounted for by resorting to regularized RL, which augments the target value function with a structure-promoting regularization term. Focusing on infinite-horizon discounted Markov decision processes, this paper proposes a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent (Lan, 2021), the proposed algorithm accommodates a general class of convex regularizers, together with a broad family of Bregman divergences chosen in cognizance of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over the entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence is provably robust to inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the applicability and appealing performance of GPMD.
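For intuition, in a single state with entropy regularization and the KL divergence as the Bregman term, the mirror-descent update admits a closed form. A toy sketch of this one member of the family that GPMD generalizes (`eta` and `tau` are assumed names for the step size and regularization strength):

```python
import numpy as np

def pmd_step(policy, q_values, eta=0.5, tau=0.5):
    """One entropy-regularized policy mirror descent step for a single
    state: a tempered multiplicative-weights update,
    pi' proportional to (pi * exp(eta * Q))^(1 / (1 + eta * tau)).
    Computed in log space for numerical stability."""
    logits = (np.log(policy) + eta * q_values) / (1.0 + eta * tau)
    logits -= logits.max()  # shift for numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()

# Repeated updates concentrate mass on the highest-value action, while
# the entropy regularizer keeps the converged policy stochastic.
pi = np.full(3, 1.0 / 3.0)
for _ in range(50):
    pi = pmd_step(pi, np.array([1.0, 0.0, 0.0]))
```

With entropy regularization, the fixed point is the softmax policy proportional to exp(Q / tau) rather than a deterministic argmax.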
To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes the design of neural architectures quite difficult. This difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we propose automating neural architecture search for decentralized training, a new DNN training paradigm called federated neural architecture search, namely federated NAS. To cope with the primary challenge of limited on-client computation and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS fully exploits the key opportunity that model candidates are insufficiently retrained during the architecture search process, and incorporates three key optimizations: parallel candidate training on partial clients, early dropping of less promising candidates, and a dynamic number of rounds. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to state-of-the-art NAS algorithms that train models with centralized data, and reduces client cost by up to two orders of magnitude compared with a straightforward design of federated NAS.
Model bias triggered by long-tailed data has been widely studied. However, measures based on the number of samples cannot explain three phenomena simultaneously: (1) Given enough data, the classification performance gain from additional samples is marginal. (2) When data are insufficient, classification performance decays precipitously as the number of training samples decreases. (3) Models trained on sample-balanced datasets still exhibit different biases for different classes. In this work, we define and quantify the semantic scale of classes, which is used to measure the feature diversity of classes. It is exciting to find experimentally that semantic scale exhibits a marginal effect, which perfectly describes the first two phenomena. Further, we propose a quantitative measurement of semantic scale imbalance, which can accurately reflect model bias on multiple datasets, even on sample-balanced data, revealing a novel perspective for the study of class imbalance. Due to the prevalence of semantic scale imbalance, we propose semantic-scale-balanced learning, including a general loss improvement scheme and a dynamic re-weighting training framework that overcomes the challenge of calculating semantic scales in real time during iterations. Comprehensive experiments show that dynamic semantic-scale-balanced learning consistently enables the model to perform superiorly on large-scale long-tailed and non-long-tailed natural and medical datasets, which is a good starting point for mitigating this prevalent but unnoticed model bias.
CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs). However, the inconsistency between the mixed images and the corresponding labels harms its efficacy. Existing CutMix variants tackle this problem by generating more consistent mixed images or more precise mixed labels, but they inevitably introduce heavy training overhead or require extra information, undermining ease of use. To this end, we propose an efficient and effective Self-Motivated image Mixing method (SMMix), which motivates both image and label enhancement by the model under training itself. Specifically, we propose a max-min attention region mixing approach that enriches the attention-focused objects in the mixed images. Then, we introduce a fine-grained label assignment technique that co-trains the output tokens of mixed images with fine-grained supervision. Moreover, we devise a novel feature consistency constraint to align features from mixed and unmixed images. Thanks to the subtle designs of the self-motivated paradigm, our SMMix stands out for its smaller training overhead and better performance than other CutMix variants. In particular, SMMix improves the accuracy of DeiT-T/S, CaiT-XXS-24/36, and PVT-T/S/M/L by more than +1% on ImageNet-1k. The generalization capability of our method is also demonstrated on downstream tasks and out-of-distribution datasets. Code for this project is available at https://github.com/ChenMnZ/SMMix.
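As background for the discussion above, the vanilla CutMix operation that SMMix refines can be sketched as follows (plain random-region CutMix, not SMMix's attention-guided max-min mixing; names are illustrative):

```python
import numpy as np

def cutmix(img_a, img_b, lam, rng):
    """Vanilla CutMix: paste a random rectangle from img_b into img_a.
    The mixed label weight for img_a is the actual area fraction kept,
    which is the source of the image/label inconsistency SMMix targets:
    the pasted region may miss the labeled object entirely."""
    h, w = img_a.shape[:2]
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # random center
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Recompute the label weight from the area actually replaced.
    lam_adjusted = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam_adjusted

rng = np.random.default_rng(0)
a, b = np.zeros((32, 32, 3)), np.ones((32, 32, 3))
mixed, lam_adj = cutmix(a, b, lam=0.75, rng=rng)
```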
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, such as the need for global coherence in generated stories, still prevent generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structured knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy of how existing methods integrate structured knowledge into story generation; (ii) we summarize the story corpora, structured knowledge datasets, and evaluation metrics involved; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.